Busemann functions and barrier functions
We show that Busemann functions on a smooth, non-compact, complete,
boundaryless, connected Riemannian manifold are viscosity solutions of
the Hamilton-Jacobi equation determined by the Riemannian metric, and
consequently that they are locally semi-concave with linear modulus. We also
analyze the structure of the singular sets of Busemann functions. Moreover,
we study barrier functions, which are analogous to Mather's barrier functions
in Mather theory, and establish some of their fundamental properties. Based on
barrier functions, we can define relations on the set of lines and thus
classify them. We also discuss some initial relations with the ideal boundary
of the Riemannian manifold.
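For reference, a sketch of the standard objects behind these claims (standard
definitions, not statements taken from the paper): for a unit-speed ray
\gamma on a Riemannian manifold (M, g), the Busemann function is

    b_\gamma(x) = \lim_{t \to \infty} \bigl( d(x, \gamma(t)) - t \bigr),

where d is the Riemannian distance, and the Hamilton-Jacobi equation
determined by the metric is the eikonal equation

    |\nabla u|_g = 1,

so the assertion is that b_\gamma satisfies |\nabla b_\gamma|_g = 1 in the
viscosity sense.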
Fixed-point Factorized Networks
In recent years, Deep Neural Network (DNN) based methods have achieved
remarkable performance on a wide range of tasks and have been among the most
powerful and widely used techniques in computer vision. However, DNN-based
methods are both computationally intensive and resource-consuming, which
hinders their application on embedded systems such as smartphones. To
alleviate this problem, we introduce novel Fixed-point Factorized Networks
(FFN) for pretrained models to reduce the computational complexity as well as
the storage requirements of networks. The resulting networks have weights of
only -1, 0 and 1, which eliminates most of the resource-consuming
multiply-accumulate operations (MACs). Extensive experiments on the
large-scale ImageNet classification task show that the proposed FFN requires
only one-thousandth of the multiply operations while achieving comparable
accuracy.
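To see why weights restricted to {-1, 0, 1} remove multiplications, here is a
minimal sketch (illustrative only; the function names and the thresholding
rule are hypothetical, and the paper's actual FFN obtains its ternary weights
from a fixed-point factorization of the pretrained model rather than simple
thresholding):

import numpy as np

def ternarize(w, threshold=0.05):
    """Map float weights to {-1, 0, +1} by sign, zeroing small entries.
    (Hypothetical thresholding rule, for illustration only.)"""
    t = np.zeros_like(w)
    t[w > threshold] = 1.0
    t[w < -threshold] = -1.0
    return t

def ternary_dot(t, x):
    """Inner product with ternary weights: every term is +x_i, -x_i, or
    dropped, so the usual multiply-accumulate reduces to adds/subtracts."""
    return x[t == 1.0].sum() - x[t == -1.0].sum()

rng = np.random.default_rng(0)
w, x = rng.normal(size=8), rng.normal(size=8)
t = ternarize(w)
assert np.isclose(ternary_dot(t, x), np.dot(t, x))  # same value, no multiplies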
From Hashing to CNNs: Training Binary Weight Networks via Hashing
Deep convolutional neural networks (CNNs) have shown appealing performance on
various computer vision tasks in recent years, which motivates people to
deploy CNNs in real-world applications. However, most state-of-the-art CNNs
require large amounts of memory and computational resources, which hinders
their deployment on mobile devices. Recent studies show that low-bit weight
representations can greatly reduce storage and memory demands and also enable
efficient network inference. To achieve this goal, we propose a novel approach
named BWNH to train Binary Weight Networks via Hashing. In this paper, we
first reveal the strong connection between inner-product preserving hashing
and binary weight networks, showing that training binary weight networks can
intrinsically be regarded as a hashing problem. Based on this perspective, we
propose an alternating optimization method to learn the hash codes instead of
directly learning the binary weights. Extensive experiments on CIFAR-10,
CIFAR-100 and ImageNet demonstrate that our proposed BWNH outperforms the
current state-of-the-art by a large margin.
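The alternating structure can be illustrated with the common rank-one binary
approximation w ≈ alpha * b with b in {-1, +1}^n (an assumed stand-in for
exposition; BWNH's real objective comes from inner-product preserving
hashing, and the code below is not the paper's algorithm):

import numpy as np

def binarize_alternating(w, n_iters=10):
    """Fit w ≈ alpha * b, b in {-1, +1}^n, by alternating minimization
    of ||w - alpha * b||^2. This simple least-squares instance converges
    in one pass; the loop just shows the alternating pattern the
    abstract refers to."""
    b = np.where(w >= 0, 1.0, -1.0)
    alpha = 1.0
    for _ in range(n_iters):
        alpha = np.dot(w, b) / len(w)    # fix b, closed-form alpha
        b = np.where(w >= 0, 1.0, -1.0)  # fix alpha, optimal b = sign(w)
    return alpha, b

alpha, b = binarize_alternating(np.array([0.7, -0.3, 0.1, -0.9]))
print(alpha, b)  # 0.5 [ 1. -1.  1. -1.]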